Philosophy Dictionary of Arguments


 
Neural networks: Neural networks are computational models inspired by the human brain, designed to recognize patterns and solve complex problems. They consist of layers of interconnected nodes (analogous to neurons) that process input data and learn to perform tasks by adjusting the strength of connections based on feedback. Used extensively in machine learning, they enable applications like image recognition, language processing, and predictive analysis. See also Artificial Neural networks, Connectionism, Computer models, Computation, Artificial Intelligence, Machine learning.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Terrence W. Deacon on Neural Networks - Dictionary of Arguments

I 130
Neural networks/learning/Deacon: the basic structure consists of three layers: input units, output units and hidden units (the middle layer), together with their connections. The states of the nodes in the middle layer (0 or 1) are initially influenced by the input nodes. Crucially, the strength of the connections emerges only through more frequent use. The connections are trained by comparing the success of the output signal (correct or incorrect association) with the input.
Cf. >Learning, >Learning/Hebb, >Input/Output.
I 131
This training corresponds to adaptation to a body of external behaviour and is analogous to learning. Such systems are far better at recognizing patterns than conventionally programmed computers. When neural networks are trained to categorize stimuli, they can readily extend this to new stimuli. When incidental interference occurs, they are superior to conventional computers...
I 132
... in that they react without reinforcing problematic connections, i.e. they do not react in an all-or-nothing way. This resembles the way nervous systems react to damage.
>Machine learning.
Information processing within neural networks has been compared with holograms that have information available from several perspectives at the same time.
Short-term memory: this can be simulated with recurrent networks (see J. Elman(1)): former states of the hidden layer are fed back and processed as new input.
Language acquisition/Elman: in this way language learning could be simulated: the problem of learning syntax was translated into the problem of mapping previous sequences onto future input sequences. Incomplete sequences were completed by the system with the most likely continuations. Initially this involved only sequences of 0s and 1s, i.e. meanings were disregarded.
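Illustration (not Elman's original simulation): a recurrent network in this sense feeds the previous hidden state back in alongside the current input and is trained to predict the next element of a sequence; incomplete sequences are then completed with the likeliest continuation. The dimensions, the one-step error feedback and the repeating 0/1 toy sequence below are assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden = 1, 5
W_in  = rng.normal(scale=0.5, size=(n_in, n_hidden))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # former hidden state re-entered as input
W_out = rng.normal(scale=0.5, size=(n_hidden, n_in))

def step(x_t, h_prev):
    # One time step: current input plus previous hidden state -> new hidden state -> prediction.
    h = sigmoid(x_t @ W_in + h_prev @ W_rec)
    y = sigmoid(h @ W_out)   # predicted probability that the next symbol is 1
    return h, y

def train_on_sequence(seq, epochs=300, lr=0.3):
    # Truncated one-step error feedback: compare each prediction with the actual next symbol.
    global W_in, W_rec, W_out
    for _ in range(epochs):
        h = np.zeros(n_hidden)
        for t in range(len(seq) - 1):
            x_t = np.array([float(seq[t])])
            target = np.array([float(seq[t + 1])])
            h_prev = h
            h, y = step(x_t, h_prev)
            err_out = (y - target) * y * (1 - y)
            err_hid = (err_out @ W_out.T) * h * (1 - h)
            W_out -= lr * np.outer(h, err_out)
            W_in  -= lr * np.outer(x_t, err_hid)
            W_rec -= lr * np.outer(h_prev, err_hid)

# Toy "grammar": the repeating pattern 0, 0, 1.
seq = [0, 0, 1] * 20
train_on_sequence(seq)

h = np.zeros(n_hidden)
for s in [0, 0, 1, 0, 0]:
    h, y = step(np.array([float(s)]), h)
print(y)   # in the repeating pattern, the likeliest continuation after ...0, 0 is 1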
Problem: neural networks sometimes converge on suboptimal solutions because they only take local patterns into account.
Solution: to prevent the networks from getting trapped in such "learning potholes", "noise" (random disturbances) can be introduced, forcing the system to search for possible solutions in another region.
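Illustration of the "learning pothole" problem (an editorial sketch, not from Deacon): gradient-style learning on an error surface with a shallow local minimum can get stuck there; adding annealed random disturbances to the updates lets the search sample other regions. The error surface, noise scale and schedule below are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(2)

def loss(w):
    # Tilted double well: a shallow local minimum near w = -0.88
    # and a deeper global minimum near w = +1.09.
    return 0.25 * w**4 - 0.5 * w**2 - 0.2 * w

def grad(w):
    return w**3 - w - 0.2

def descend(w0, noise_scale, steps=2000, lr=0.05):
    # Gradient descent; with noise_scale > 0, random disturbances can kick
    # the search out of the shallow well into another region.
    w = w0
    for t in range(steps):
        noise = noise_scale * rng.normal() * (1 - t / steps)   # anneal the noise to zero
        w -= lr * grad(w) + noise
    return w

start = -1.2   # begins in the basin of the shallow (suboptimal) minimum
print("without noise:", round(descend(start, noise_scale=0.0), 2))   # stays near -0.88
print("with noise:   ", round(descend(start, noise_scale=0.3), 2))   # can end near the deeper minimum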
I 133
Language acquisition/Elman/Deacon: Elman kept the different stages of learning more complex structures apart so that they could not interfere with each other.
>Language Acquisition.
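Illustration of the "starting small" idea (a schematic sketch; the stage definitions and the training step below are placeholders, not Elman's code): training proceeds in stages of increasing structural complexity, and each stage is completed before more complex material is introduced, so the stages do not interfere.

def make_stage_data(max_depth):
    # Placeholder: stand-in for training sequences up to a given structural complexity.
    return [f"sequence with embedding depth {d}" for d in range(1, max_depth + 1)]

def train_stage(model_state, data, epochs=2):
    # Placeholder training step: here it only records what the model has been exposed to.
    for _ in range(epochs):
        model_state["seen"].extend(data)
    return model_state

model_state = {"seen": []}
for depth in (1, 2, 3):            # start small, then increase complexity stage by stage
    data = make_stage_data(depth)
    model_state = train_stage(model_state, data)

print(len(model_state["seen"]))    # material from all stages, presented in order of complexity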
I 134
Deacon: the production of grammatically correct forms was learned inductively, without presupposing any grammar, let alone a universal grammar.
>Universal grammar, >Grammar, >N. Chomsky.
I 135
N.B.: this showed that the structure of the learning process has a bearing on what can and cannot be learned. More importantly, it suggests that the structure of language and the way it has to be learned are related.
>Language/Deacon, >Brain/Deacon.

1. Elman, J. (1991): Incremental learning, or the importance of starting small. In: 13th Annual Conference of the Cognitive Science Society, NJ: L. Erlbaum, 443-448.

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions by the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.

Dea I
T. W. Deacon
The Symbolic Species: The Co-evolution of Language and the Brain New York 1998

Dea II
Terrence W. Deacon
Incomplete Nature: How Mind Emerged from Matter New York 2013

